Change potentially dangerous LLMs prompt injection to caution#70
Change potentially dangerous LLMs prompt injection to caution#70misha-plus wants to merge 1 commit intomeithecatte:masterfrom
Conversation
|
xD |
|
( do they know ) |
|
All reasonable LLM vendors properly sandbox the bullshit generators they ship to their users.1 The current wording of the AGENTS.md therefore doesn't pose any actual danger,2 and merely serves to send a stronger signal than the wording you suggest. In light of the above, I do not consider any action to be necessary here; those who have expressed their opinions above seem to agree. 1 Proof: due to the fundamental nature of LLMs, it is impossible to give any sound guarantees about their output, and one must consider said output to be fully controlled by anyone who has any control over any part of the prompt, context, or training data. Therefore, any vendor who does not ship a proper sandbox is simply not reasonable by any possible meaning of the word. 2 Should an unreasonable LLM vendor exist, there is still no new danger being introduced by the current contents of the AGENTS.md file. |
Hello!
I found potentially dangerous prompt for LLMs in
AGENTS.md, so I've changed it to LLM usage caution with the policy described inCONTRIBUTING.md